
    Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis

    The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), which is a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where there is a lack of annotated training data. Our framework has three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner; (ii) we initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features; (iii) we adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN had better accuracy than other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories. Comment: Accepted by Medical Image Analysis (with the new title 'Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis'). The manuscript is available at https://doi.org/10.1016/j.media.2019.06.005.
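    The abstract names multi-scale spatial pyramid pooling (SPP) but does not give its configuration. The sketch below illustrates the general SPP idea only; the pyramid levels (1, 2, 4) and the use of max pooling are illustrative assumptions, not the settings used by CSKN.

```python
# Minimal sketch of multi-scale spatial pyramid pooling (SPP).
# Levels and pooling operator are assumptions, not CSKN's exact setup.

def max_pool_bin(fmap, r0, r1, c0, c1):
    """Max over one spatial bin of a 2D feature map."""
    return max(fmap[r][c] for r in range(r0, r1) for c in range(c0, c1))

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Pool a 2D feature map into a fixed-length vector.

    Each level n splits the map into an n x n grid of bins and
    max-pools each bin, so the output length is sum(n*n for n in levels)
    regardless of the input's spatial size.
    """
    rows, cols = len(fmap), len(fmap[0])
    features = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * rows // n, (i + 1) * rows // n
                c0, c1 = j * cols // n, (j + 1) * cols // n
                features.append(max_pool_bin(fmap, r0, r1, c0, c1))
    return features
```

    The fixed output length (here 1 + 4 + 16 = 21) is what allows variable-sized inputs to map to a single descriptor, which is the property the multi-scale pooling exploits.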

    SSPT-bpMRI: A Self-supervised Pre-training Scheme for Improving Prostate Cancer Detection and Diagnosis in Bi-parametric MRI

    Prostate cancer (PCa) is one of the most prevalent cancers in men. Early diagnosis plays a pivotal role in reducing the mortality rate from clinically significant PCa (csPCa). In recent years, bi-parametric magnetic resonance imaging (bpMRI) has attracted great attention for the detection and diagnosis of csPCa. bpMRI is able to overcome some limitations of multi-parametric MRI (mpMRI), such as the use of contrast agents, the time required for imaging and the costs, and achieve detection performance comparable to mpMRI. However, inter-reader agreement is currently low for prostate MRI. Advancements in artificial intelligence (AI) have propelled the development of deep learning (DL)-based computer-aided detection and diagnosis (CAD) systems. However, most of the existing DL models developed for csPCa identification are restricted by the scale of the data and the scarcity of labels. In this paper, we propose a self-supervised pre-training scheme named SSPT-bpMRI with an image restoration pretext task integrating four different image transformations to improve the performance of DL algorithms. Specifically, we explored the potential value of self-supervised pre-training in fully supervised and weakly supervised situations. Experiments on the publicly available PI-CAI dataset demonstrate that our model outperforms the fully supervised or weakly supervised model alone.
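    The abstract describes an image-restoration pretext task built from four image transformations but does not name them. As a hedged illustration of the pattern only, the sketch below corrupts an image with two transformations that are common in the restoration-pretext literature (block masking and local pixel shuffling) and pairs it with the original as the restoration target; the specific transformations are assumptions, not SSPT-bpMRI's.

```python
# Illustrative image-restoration pretext task: corrupt an image, then
# a network would be trained to restore the original. The particular
# corruptions here (block masking, local pixel shuffling) are assumed
# examples -- the abstract does not specify SSPT-bpMRI's four transforms.
import random

def block_mask(img, r0, c0, size, fill=0.0):
    """Zero out a square block -- the model must inpaint it."""
    out = [row[:] for row in img]
    for r in range(r0, min(r0 + size, len(img))):
        for c in range(c0, min(c0 + size, len(img[0]))):
            out[r][c] = fill
    return out

def local_shuffle(img, r0, c0, size, rng):
    """Shuffle pixels inside a small window -- the model must reorder them."""
    out = [row[:] for row in img]
    patch = [out[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)]
    rng.shuffle(patch)
    it = iter(patch)
    for r in range(r0, r0 + size):
        for c in range(c0, c0 + size):
            out[r][c] = next(it)
    return out

def make_pretext_pair(img, rng):
    """Return (corrupted_input, restoration_target) for self-supervision."""
    corrupted = block_mask(img, 1, 1, 2)
    corrupted = local_shuffle(corrupted, 0, 0, 2, rng)
    return corrupted, img
```

    No labels are needed to build these pairs, which is what lets the scheme pre-train on unlabelled bpMRI before fully or weakly supervised fine-tuning.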

    Development of a risk predictive scoring system to identify patients at risk of representation to emergency department: a retrospective population-based analysis in Australia

    Objective To examine the characteristics of frequent visitors (FVs) to emergency departments (EDs) and develop a predictive model to identify those at high risk of future representations to the ED among the younger and general population (aged ≤70 years). Design and setting A retrospective analysis of ED data for younger and general patients (aged ≤70 years), collected between 1 January 2009 and 30 June 2016 from a public hospital in Australia. Participants A total of 343 014 ED presentations were identified from 170 134 individual patients. Main outcome measures Proportion of FVs (those attending four or more times annually), demographic characteristics (age, sex, indigenous and marital status), mode of separation (eg, admitted to ward), triage categories, time of arrival to ED, referral on departure and clinical conditions. Statistical estimates from a mixed-effects model were used to develop a risk predictive scoring system. Results The FVs were characterised by young adult (32.53%) to late-middle-aged (26.07%) patients with a higher proportion of indigenous (5.7%) and mental health-related presentations (10.92%). They were also more likely to arrive by ambulance (36.95%) and to leave at their own risk without completing treatment (9.8%). They were also highly associated with socially disadvantaged groups such as people who have been divorced, widowed or separated (12.81%). These findings were then used to develop a predictive model to identify potential FVs. The performance of our derived risk predictive model was favourable, with an area under the receiver operating characteristic curve (ie, C-statistic) of 65.7%. Conclusion The development of a demographic and clinical profile of FVs, coupled with the use of a predictive model, can highlight gaps in interventions and identify new opportunities for better health outcomes and planning.
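    To make the "risk predictive scoring system" concrete: a fitted model's log-odds coefficients can be converted into a simple point score and a probability of re-presentation. The predictors below mirror the FV characteristics reported in the abstract, but the weights and intercept are hypothetical placeholders, not the coefficients estimated from this cohort.

```python
# Sketch of deriving a risk score from model coefficients. The weights
# below are hypothetical, NOT the paper's estimated coefficients.
import math

WEIGHTS = {                          # hypothetical log-odds weights
    "arrived_by_ambulance": 0.8,
    "mental_health_presentation": 0.7,
    "left_at_own_risk": 0.5,
    "divorced_widowed_separated": 0.4,
}
INTERCEPT = -2.0                     # hypothetical baseline log-odds

def risk_probability(patient):
    """Logistic probability of re-presentation from binary predictors."""
    logit = INTERCEPT + sum(WEIGHTS[k] for k, v in patient.items() if v)
    return 1.0 / (1.0 + math.exp(-logit))

def risk_points(patient, scale=10):
    """Integer score: each present predictor adds round(weight * scale)."""
    return sum(round(WEIGHTS[k] * scale) for k, v in patient.items() if v)
```

    The integer point score is a common bedside simplification of the logistic model; ranking patients by either quantity gives the same ordering here because every weight is positive.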

    Unsupervised Deep Transfer Feature Learning for Medical Image Classification

    The accuracy and robustness of image classification with supervised deep learning are dependent on the availability of large-scale annotated training data. However, there is a paucity of annotated data available due to the complexity of manual annotation. To overcome this problem, a popular approach is to use transferable knowledge across different domains by: 1) using a generic feature extractor that has been pre-trained on large-scale general images (i.e., transfer-learned) but which is not suited to capturing characteristics of medical images; or 2) fine-tuning generic knowledge with a relatively small number of annotated images. Our aim is to reduce the reliance on annotated training data by using a new hierarchical unsupervised feature extractor with a convolutional auto-encoder placed atop a pre-trained convolutional neural network. Our approach constrains the rich and generic image features from the pre-trained domain to a sophisticated representation of the local image characteristics from the unannotated medical image domain. Our approach has higher classification accuracy than transfer-learned approaches and is competitive with state-of-the-art supervised fine-tuned methods. Comment: 4 pages, 1 figure, 3 tables. Accepted (Oral) at the IEEE International Symposium on Biomedical Imaging 201
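    The core idea above is to refine fixed, pre-trained features with an unsupervised auto-encoder trained only on unannotated target-domain images. As a heavily simplified, dependency-free sketch of that idea, the code below fits a tied-weight *linear* auto-encoder (standing in for the paper's convolutional auto-encoder) to a set of feature vectors (standing in for pre-trained CNN activations); both substitutions are assumptions for illustration only.

```python
# Dependency-free sketch: refine fixed "pretrained" feature vectors with
# a tied-weight linear auto-encoder. A single linear layer stands in for
# the paper's convolutional auto-encoder -- a deliberate simplification.
import random

def train_autoencoder(feats, hidden=1, lr=0.01, epochs=300, seed=0):
    """Fit encoder W (hidden x d, decoder = W^T) by per-sample gradient descent."""
    rng = random.Random(seed)
    d = len(feats[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(hidden)]
    for _ in range(epochs):
        for x in feats:
            h = [sum(W[i][j] * x[j] for j in range(d)) for i in range(hidden)]
            xh = [sum(W[i][j] * h[i] for i in range(hidden)) for j in range(d)]
            err = [xh[j] - x[j] for j in range(d)]
            # Exact gradient of 0.5*||xh - x||^2 w.r.t. the tied weights.
            grad = [[err[j] * h[i]
                     + sum(err[k] * W[i][k] for k in range(d)) * x[j]
                     for j in range(d)] for i in range(hidden)]
            for i in range(hidden):
                for j in range(d):
                    W[i][j] -= lr * grad[i][j]
    return W

def reconstruction_loss(W, feats):
    """Total 0.5*||decode(encode(x)) - x||^2 over all feature vectors."""
    total = 0.0
    for x in feats:
        h = [sum(wi[j] * x[j] for j in range(len(x))) for wi in W]
        xh = [sum(W[i][j] * h[i] for i in range(len(W))) for j in range(len(x))]
        total += 0.5 * sum((xh[j] - x[j]) ** 2 for j in range(len(x)))
    return total
```

    Training uses no labels at all: the reconstruction objective alone adapts the representation to the structure of the (here synthetic) unannotated features.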

    Step-wise Integration of Deep Class-specific Learning for Dermoscopic Image Segmentation

    The segmentation of abnormal regions in dermoscopic images is an important step in the automated computer-aided diagnosis (CAD) of skin lesions. Recent methods based on fully convolutional networks (FCNs) have been very successful for dermoscopic image segmentation. However, they tend to overfit to the visual characteristics present in the dominant non-melanoma studies and therefore perform poorly on the complex visual characteristics exhibited by melanoma studies, which usually consist of fuzzy boundaries and heterogeneous textures. In this paper, we propose a new method for automated skin lesion segmentation that overcomes these limitations via a novel deep class-specific learning approach, which learns the important visual characteristics of the skin lesions of each individual class (melanoma vs non-melanoma) on an individual basis. We also introduce a new probability-based, step-wise integration to combine complementary segmentation results derived from individual class-specific learning models. We achieved an average Dice coefficient of 85.66% on the ISBI 2017 Skin Lesion Challenge (SLC), 91.77% on the ISBI 2016 SLC and 92.10% on the PH2 dataset, with corresponding Jaccard indices of 77.73%, 85.92% and 85.90%, respectively. Our experiments on three well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation.
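    The Dice coefficients and Jaccard indices quoted above are the standard overlap metrics for binary segmentation masks. They can be computed directly from the sets of predicted and ground-truth lesion pixels:

```python
# Standard segmentation overlap metrics, on sets of pixel coordinates.

def dice_coefficient(pred, truth):
    """2|A∩B| / (|A| + |B|) for predicted and ground-truth pixel sets."""
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

def jaccard_index(pred, truth):
    """|A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    return len(pred & truth) / len(pred | truth)
```

    The two metrics are monotonically related (J = D / (2 - D)), which is why the paper's Dice and Jaccard rankings across the three datasets agree.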

    A Spatiotemporal Volumetric Interpolation Network for 4D Dynamic Medical Images

    Dynamic medical imaging is usually limited in application due to the large radiation doses and longer image scanning and reconstruction times. Existing methods attempt to reduce the dynamic sequence by interpolating the volumes between the acquired image volumes. However, these methods are limited to 2D images and/or are unable to support large variations in the motion between the image volume sequences. In this paper, we present a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images. SVIN introduces dual networks: the first is the spatiotemporal motion network, which leverages a 3D convolutional neural network (CNN) for unsupervised parametric volumetric registration to derive a spatiotemporal motion field from two image volumes; the second is the sequential volumetric interpolation network, which uses the derived motion field to interpolate image volumes, together with a new regression-based module to characterize the periodic motion cycles in functional organ structures. We also introduce an adaptive multi-scale architecture to capture large volumetric anatomical motions. Experimental results demonstrated that our SVIN outperformed state-of-the-art temporal medical interpolation methods and natural video interpolation methods that have been extended to support volumetric images. Our ablation study further exemplified that our motion network was able to better represent large functional motion compared with state-of-the-art unsupervised medical registration methods. Comment: 10 pages, 8 figures. Conference on Computer Vision and Pattern Recognition (CVPR) 202
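    To illustrate only the warping step that the motion field enables: given a per-voxel displacement field from one volume toward the next, an intermediate volume can be produced by scaling the field by a fraction t and warping the source volume. SVIN learns the field with a 3D CNN and interpolates with a learned network; the sketch below instead takes a supplied field, assumes motion linear in time, and uses nearest-neighbour sampling, all of which are simplifications for illustration.

```python
# Minimal sketch of motion-field-based volume interpolation. The field
# is supplied (SVIN learns it), motion is assumed linear in time, and
# sampling is nearest-neighbour -- all simplifications.

def warp_volume(vol, field, t):
    """Backward-warp `vol` by t * field.

    `field[z][y][x]` is a (dz, dy, dx) displacement; each output voxel
    pulls its value from the source location shifted by -t * (dz, dy, dx),
    clamped to the volume bounds.
    """
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                dz, dy, dx = field[z][y][x]
                sz = min(max(int(round(z - t * dz)), 0), Z - 1)
                sy = min(max(int(round(y - t * dy)), 0), Y - 1)
                sx = min(max(int(round(x - t * dx)), 0), X - 1)
                out[z][y][x] = vol[sz][sy][sx]
    return out
```

    Sweeping t from 0 to 1 yields the intermediate volumes between the two acquired ones; replacing the linear scaling with a learned, regression-based schedule is what lets SVIN model periodic organ motion rather than assuming constant velocity.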

    The Checkpoint Program: collaborative care to reduce the reliance of frequent presenters on ED

    Introduction: Growing pressures upon Emergency Departments [ED] call for new ways of working with frequent presenters who, although small in number, place extensive demands on services, to say nothing of the costs and consequences for the patients themselves. EDs are often poorly equipped to address the multi-dimensional nature of patient need and the complex circumstances surrounding repeated presentation. Employing a model of intensive short-term community-based case management, the Checkpoint program sought to improve care coordination for this patient group, thereby reducing their reliance on ED. Method: This study employed a single-group interrupted time series design, evaluating patient engagement with the program and year-on-year individual differences in the number of ED visits pre- and post-enrolment. Associated savings were also estimated. Results: Prior to intervention, there were two dominant modes in the ED presentation trends of patients. One group had a steady pattern with ≥7 presentations in each of the last four years. The other group had an increasing trend in presentations, peaking in the 12 months immediately preceding enrolment. Following the intervention, both groups demonstrated two consecutive year-on-year reductions. By the second year, and from an overall peak of 22.5 presentations per patient per annum, there was a 53% reduction in presentations. This yielded approximate savings of $7100 per patient. Discussion: Efforts to improve care coordination, when combined with proactive case management in the community, can impact positively on ED re-presentation rates, provided they are concerted, sufficiently intensive and embed the principles of integration. Conclusion: The Checkpoint program demonstrated sufficient promise to warrant further exploration of its sustainability. However, health services have yet to determine the ideal organisational structures and funding arrangements to support such initiatives.

    A mobile app and dashboard for early detection of infectious disease outbreaks: development study

    ©Euijoon Ahn, Na Liu, Tej Parekh, Ronak Patel, Tanya Baldacchino, Tracy Mullavey, Amanda Robinson, Jinman Kim. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 09.03.2021. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Public Health and Surveillance, is properly cited. The complete bibliographic information, a link to the original publication on http://publichealth.jmir.org, as well as this copyright and license information must be included